16  A5: Solving nonlinear equations in 1d III

Enter your name here: YOUR NAME HERE

Approximate time spent on this assignment: …….

⚠ Do not edit the following cell. You may find some of these functions useful.
# useful definitions that we've used so far:

using Plots
using LaTeXStrings
using Polynomials
using PrettyTables

# fixed-point iteration x_{n+1} = g(x_n); returns the vector of iterates
function simple_iteration( g, x1; N=100, tol=1e-10 )
    x = [ x1 ]
    for n in 2:N
        push!( x, g(x[n-1]) )
        if (abs(g(x[end]) - x[end]) < tol)
            break
        elseif (x[end] == Inf)
            @warn "simple iteration diverges to Inf";
            break
        elseif (x[end] == -Inf)
            @warn "simple iteration diverges to -Inf";
            break
        end 
    end
    return x
end

# relaxation method x_{n+1} = x_n - Ξ» f(x_n); returns the vector of iterates
function relaxation( f, Ξ», x1; N=100, tol=1e-10)
    x = [x1]
    r = 0.;
    for n in 2:N
        push!( x, x[n-1] - Ξ»*f(x[n-1]) )
        r = abs(f(x[end]));
        if (r < tol)
            return x
        end
    end
    @warn "max iterations with |f| = $r";
    return x
end

# Newton's method x_{n+1} = x_n - f(x_n)/fβ€²(x_n); returns the vector of iterates
function Newton( f, f_prime, x1; N=100, tol=1e-10)
    x = [x1]
    r = 0.;   # residual |f(x)|, declared here so the warning below can report it
    for n in 2:N
        push!( x, x[n-1] - f(x[n-1])/f_prime(x[n-1]) )
        r = abs(f(x[end]));
        if (r < tol)
            return x
        end
    end
    @warn "max iterations with |f| = $r";
    return x
end

# estimate the order of convergence from the iterates x and the exact root ΞΎ;
# pass Ξ± to test a specific order, or leave Ξ± = 0 to estimate it from the data
function orderOfConvergence( x, ΞΎ; Ξ±=0 )
    err = @. abs(x - ΞΎ)
    logerr = @. log10( err )
    ratios = [NaN; [logerr[i+1] / logerr[i] for i in 1:length(logerr)-1]]
    if (Ξ± == 0)
        Ξ± = ratios[end]   # no Ξ± supplied: use the last log-error ratio as the estimate
    end
    mu = [NaN; [err[i+1] / err[i]^Ξ± for i in 1:length(err)-1]]
    pretty_table( 
         [1:length(x) err logerr ratios mu];
        column_labels = ["iteration", "absolute error", "log error", "alpha", "mu (Ξ± = $Ξ±)" ]
    )
end

# error ratios ΞΌ_n = |x_{n+1} - ΞΎ| / |x_n - ΞΎ|^Ξ± along the iterates
function ΞΌ( x, ΞΎ; Ξ±=1 )
    return @. abs( x[2:end] - ΞΎ ) / ( abs(x[1:end-1] - ΞΎ )^Ξ± );
end

Ο΅ = 1.;
f = ψ -> ψ - Ο΅ * sin(ψ) - 2Ο€; # has a zero at 2Ο€
df = ψ -> 1 - ϡ * cos(ψ)
ΞΎ = 2Ο€
# sanity check: Newton on g(x) = x^2 - 2 should converge quadratically to √2
g = x -> x^2 - 2
dg = x -> 2x
orderOfConvergence( Newton( g, dg, 1. ), sqrt(2) )
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ iteration β”‚ absolute error β”‚ log error β”‚   alpha β”‚ mu (Ξ± = 2.079604050288125 β‹―
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚       1.0 β”‚       0.414214 β”‚ -0.382776 β”‚     NaN β”‚                         N β‹―
β”‚       2.0 β”‚      0.0857864 β”‚  -1.06658 β”‚ 2.78644 β”‚                     0.536 β‹―
β”‚       3.0 β”‚      0.0024531 β”‚  -2.61028 β”‚ 2.44734 β”‚                    0.4053 β‹―
β”‚       4.0 β”‚      2.1239e-6 β”‚  -5.67287 β”‚ 2.17328 β”‚                    0.5694 β‹―
β”‚       5.0 β”‚    1.59472e-12 β”‚  -11.7973 β”‚  2.0796 β”‚                         1 β‹―
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
                                                                1 column omitted

16.1 B. Kepler’s Equation

In lectures, we considered the equation

\begin{align} f(\psi) := \psi - \epsilon \sin \psi - 2\pi = 0 \end{align}

with \epsilon = 0.9. Recall that we showed that the Relaxation Method (x_{n+1} = x_n - \lambda f(x_n)) converged linearly with asymptotic error constant \left| 1 - 0.1 \lambda \right| (since f'(2\pi) = 1 - \epsilon \cos(2\pi) = 0.1 for \epsilon = 0.9), and that Newton’s Method converged faster than quadratically. For the remainder of this question, fix \epsilon = 1. Here’s a plot of f:

plot( f, 0, 10, label=L"f", lw=3 )
hline!( [0] , linestyle=:dash, lw = 3, label=L"y=0")
scatter!( [ΞΎ], [f(ΞΎ)], markersize=5, primary=false )
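
Before turning to the questions below, here is a minimal sketch of how the recalled \epsilon = 0.9 claims could be checked numerically with the helpers above; the names f09 and df09 and the choice Ξ» = 5 are illustrative, not part of the assignment.

# relaxation for Ο΅ = 0.9: the error should shrink by a factor of about |1 - 0.1Ξ»| per step,
# since fβ€²(2Ο€) = 1 - 0.9 cos(2Ο€) = 0.1
f09  = ψ -> ψ - 0.9*sin(ψ) - 2Ο€
df09 = ψ -> 1 - 0.9*cos(ψ)
ΞΌ( relaxation( f09, 5., 6. ), 2Ο€ )              # Ξ» = 5, so the ratios should approach |1 - 0.5| = 0.5
# Newton should give a better-than-quadratic order estimate for this Ο΅:
orderOfConvergence( Newton( f09, df09, 6. ), 2Ο€ )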

  1. ✍ Show that 2\pi is the unique solution to f = 0.
  2. πŸ’» Show numerically that the relaxation method converges for some choice of \lambda. What is the order of convergence of this method? (One possible starting point is sketched just after this list.)
  3. πŸ’» Show numerically that Newton’s method converges. What is the order of convergence?
  4. ✍ Compare your answers to this question to the \epsilon = 0.9 case. Why is the convergence slower/faster?
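
For question 2, one possible starting point is sketched below; Ξ» = 1 and the starting guess 6 are arbitrary choices, and because convergence is slow when \epsilon = 1, the max-iterations warning from relaxation is expected here rather than a sign of a bug.

x_relax = relaxation( f, 1., 6. )                    # Ξ» = 1; experiment with other values of Ξ»
orderOfConvergence( x_relax[end-10:end], 2Ο€; Ξ±=1 )   # tabulate the error over the last iterates

The analogous check for Newton’s method (question 3) is the call below.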
orderOfConvergence( Newton(f, df, 6.), 2Ο€; Ξ±=1  )
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ iteration β”‚ absolute error β”‚ log error β”‚   alpha β”‚ mu (Ξ± = 1) β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚       1.0 β”‚       0.283185 β”‚ -0.547929 β”‚     NaN β”‚        NaN β”‚
β”‚       2.0 β”‚       0.188537 β”‚ -0.724603 β”‚ 1.32244 β”‚   0.665773 β”‚
β”‚       3.0 β”‚       0.125617 β”‚ -0.900952 β”‚ 1.24337 β”‚   0.666271 β”‚
β”‚       4.0 β”‚      0.0837225 β”‚  -1.07716 β”‚ 1.19558 β”‚   0.666491 β”‚
β”‚       5.0 β”‚      0.0558085 β”‚   -1.2533 β”‚ 1.16352 β”‚   0.666589 β”‚
β”‚       6.0 β”‚      0.0372037 β”‚  -1.42941 β”‚ 1.14052 β”‚   0.666632 β”‚
β”‚       7.0 β”‚      0.0248019 β”‚  -1.60551 β”‚  1.1232 β”‚   0.666651 β”‚
β”‚       8.0 β”‚      0.0165344 β”‚  -1.78161 β”‚ 1.10968 β”‚    0.66666 β”‚
β”‚       9.0 β”‚      0.0110229 β”‚   -1.9577 β”‚ 1.09884 β”‚   0.666664 β”‚
β”‚      10.0 β”‚     0.00734859 β”‚   -2.1338 β”‚ 1.08995 β”‚   0.666665 β”‚
β”‚      11.0 β”‚     0.00489906 β”‚  -2.30989 β”‚ 1.08253 β”‚   0.666666 β”‚
β”‚      12.0 β”‚     0.00326604 β”‚  -2.48598 β”‚ 1.07623 β”‚   0.666666 β”‚
β”‚      13.0 β”‚     0.00217736 β”‚  -2.66207 β”‚ 1.07083 β”‚   0.666667 β”‚
β”‚      14.0 β”‚     0.00145157 β”‚  -2.83816 β”‚ 1.06615 β”‚   0.666667 β”‚
β”‚      15.0 β”‚    0.000967715 β”‚  -3.01425 β”‚ 1.06204 β”‚   0.666667 β”‚
β”‚      16.0 β”‚    0.000645144 β”‚  -3.19034 β”‚ 1.05842 β”‚   0.666667 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

16.2 D. Research Task

Find a research paper explaining a method named after one of the following people: Halley, Householder, Osada, Ostrowski, Steffensen, Traub. What is the main novelty of this method? How does it (claim to) improve upon previous methods in the literature? Implement your chosen method and test it on a function of your choice.

Please clearly cite which paper you are describing.
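
As a purely illustrative sketch of the final step (implement and test your chosen method), here is what an implementation of Halley’s classical iteration might look like, mirroring the interface of Newton above; the test function and the Ξ± = 3 check are arbitrary choices, and this sketch is not a substitute for the paper you find and cite.

# Halley's method: x_{n+1} = x_n - 2 f fβ€² / (2 fβ€²^2 - f fβ€³), cubically convergent for simple roots
function Halley( f, f_prime, f_pprime, x1; N=100, tol=1e-10 )
    x = [x1]
    r = 0.;
    for n in 2:N
        fx = f(x[n-1]); dfx = f_prime(x[n-1]); ddfx = f_pprime(x[n-1])
        push!( x, x[n-1] - 2*fx*dfx / (2*dfx^2 - fx*ddfx) )
        r = abs(f(x[end]));
        if (r < tol)
            return x
        end
    end
    @warn "max iterations with |f| = $r";
    return x
end

# quick sanity check on g(x) = x^2 - 2, whose positive root is √2:
orderOfConvergence( Halley( x -> x^2 - 2, x -> 2x, x -> 2., 1. ), sqrt(2); Ξ±=3 )

Whichever method you choose, returning the full vector of iterates, as the helpers above do, makes it easy to feed the result straight into orderOfConvergence.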